social issue
Longitudinal Monitoring of LLM Content Moderation of Social Issues
Dai, Yunlang, Lurie, Emma, Metaxa, Danaé, Friedler, Sorelle A.
Large language models' (LLMs') outputs are shaped by opaque and frequently changing company content moderation policies and practices. LLM moderation often takes the form of refusal; models' refusal to produce text about certain topics both reflects company policy and subtly shapes public discourse. We introduce AI Watchman, a longitudinal auditing system to publicly measure and track LLM refusals over time, to provide transparency into an important and black-box aspect of LLMs. Using a dataset of over 400 social issues, we audit OpenAI's moderation endpoint, GPT-4.1, GPT-5, and DeepSeek (in both English and Chinese). We find evidence that changes in company policies, even those not publicly announced, can be detected by AI Watchman, and identify company- and model-specific differences in content moderation. We also qualitatively analyze and categorize different forms of refusal. This work contributes evidence for the value of longitudinal auditing of LLMs, and AI Watchman, one system for doing so.
- Europe (0.28)
- Asia > Middle East > Israel (0.05)
- North America > United States > Texas (0.04)
- (9 more...)
- Media > News (1.00)
- Law > Civil Rights & Constitutional Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- (12 more...)
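The abstract above describes tracking refusal rates over time. As a minimal sketch of the idea (not the paper's actual system), the snippet below flags likely refusals with hypothetical keyword heuristics and computes a per-batch refusal rate; AI Watchman itself would use far more robust classification and the real moderation endpoints.

```python
import re

# Hypothetical refusal phrases for illustration only; a production
# auditor would use a trained classifier, not keyword matching.
REFUSAL_PATTERNS = [
    r"\bI (?:can(?:no|')t|won't|am unable to)\b",
    r"\bI'?m sorry, but\b",
    r"\bI cannot (?:help|assist|comply)\b",
]

def looks_like_refusal(response: str) -> bool:
    """Heuristically flag a model response as a refusal."""
    return any(re.search(p, response, re.IGNORECASE) for p in REFUSAL_PATTERNS)

def refusal_rate(responses):
    """Fraction of responses flagged as refusals; logging this per
    topic and per day yields the longitudinal series the paper tracks."""
    flags = [looks_like_refusal(r) for r in responses]
    return sum(flags) / len(flags)
```

Re-running such a probe on a fixed set of prompts each day, and comparing rates across models and dates, is what lets policy changes surface as jumps in the time series.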
Neural embedding of beliefs reveals the role of relative dissonance in human decision-making
Lee, Byunghwee, Aiyappa, Rachith, Ahn, Yong-Yeol, Kwak, Haewoon, An, Jisun
Beliefs serve as the foundation for human cognition and decision-making. They guide individuals in deriving meaning from their lives, shaping their behaviors, and forming social connections. Therefore, a model that encapsulates beliefs and their interrelationships is crucial for quantitatively studying the influence of beliefs on our actions. Despite its importance, research on the interplay between human beliefs has often been limited to a small set of beliefs pertaining to specific issues, with a heavy reliance on surveys or experiments. Here, we propose a method for extracting nuanced relations between thousands of beliefs by leveraging large-scale user participation data from an online debate platform and mapping these beliefs to an embedding space using a fine-tuned large language model (LLM). This belief embedding space effectively encapsulates the interconnectedness of diverse beliefs as well as polarization across various social issues. We discover that the positions within this belief space predict new beliefs of individuals. Furthermore, we find that the relative distance between one's existing beliefs and new beliefs can serve as a quantitative estimate of cognitive dissonance, allowing us to predict new beliefs. Our study highlights how modern LLMs, when combined with collective online records of human beliefs, can offer insights into the fundamental principles that govern human belief formation and decision-making processes.
- Asia > Middle East > Republic of Türkiye > Batman Province > Batman (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- North America > United States > Indiana > Monroe County > Bloomington (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.68)
- Law (1.00)
- Government (1.00)
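The paper above uses the relative distance between existing and new beliefs in an embedding space as a proxy for cognitive dissonance. A minimal sketch of that distance computation, assuming belief vectors have already been produced by some embedding model (the vectors here are illustrative, not the paper's fine-tuned LLM embeddings):

```python
import numpy as np

def dissonance(existing, candidate):
    """Cosine distance between an existing-belief vector and a candidate
    new belief: larger distance stands in for greater dissonance."""
    a = np.asarray(existing, dtype=float)
    b = np.asarray(candidate, dtype=float)
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```

Under this proxy, a belief identical to one already held scores 0, an orthogonal belief scores 1, and a directly opposing belief scores 2, so ranking candidate beliefs by this distance gives a simple prediction of which new beliefs an individual is likely to adopt.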
Taxonomy and Analysis of Sensitive User Queries in Generative AI Search
Jo, Hwiyeol, Park, Taiwoo, Choi, Nayoung, Kim, Changbong, Kwon, Ohjoon, Jeon, Donghyeon, Lee, Hyunwoo, Lee, Eui-Hyeon, Shin, Kyoungho, Lim, Sun Suk, Kim, Kyungmi, Lee, Jihye, Kim, Sun
Although there has been growing interest among industries in integrating generative LLMs into their services, limited experience and the scarcity of resources act as barriers to launching and operating large-scale LLM-based conversational services. In this paper, we share our experiences in developing and operating generative AI models within a national-scale search engine, with a specific focus on the sensitivity of user queries. We propose a taxonomy for sensitive search queries, outline our approaches, and present a comprehensive analysis report on sensitive queries from actual users.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > Middle East > Israel (0.04)
- North America > Dominican Republic (0.04)
- (3 more...)
- Health & Medicine (1.00)
- Law (0.95)
- Information Technology > Security & Privacy (0.68)
- Media > News (0.46)
Bias or Diversity? Unraveling Fine-Grained Thematic Discrepancy in U.S. News Headlines
Pan, Jinsheng, Qi, Weihong, Wang, Zichen, Lyu, Hanjia, Luo, Jiebo
There is a broad consensus that news media outlets incorporate ideological biases in their news articles. However, prior studies on measuring the discrepancies among media outlets and further dissecting the origins of thematic differences suffer from small sample sizes and limited scope and granularity. In this study, we use a large dataset of 1.8 million news headlines from major U.S. media outlets spanning from 2014 to 2022 to thoroughly track and dissect the fine-grained thematic discrepancy in U.S. news media. We employ multiple correspondence analysis (MCA) to quantify the fine-grained thematic discrepancy related to four prominent topics - domestic politics, economic issues, social issues, and foreign affairs in order to derive a more holistic analysis. Additionally, we compare the most frequent $n$-grams in media headlines to provide further qualitative insights into our analysis. Our findings indicate that on domestic politics and social issues, the discrepancy can be attributed to a certain degree of media bias. Meanwhile, the discrepancy in reporting foreign affairs is largely attributed to the diversity in individual journalistic styles. Finally, U.S. media outlets show consistency and high similarity in their coverage of economic issues.
- Asia > North Korea (0.28)
- Asia > Russia (0.14)
- North America > United States > New York > New York County > New York City (0.14)
- (11 more...)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
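The study above applies correspondence analysis to quantify thematic discrepancy across outlets. As an illustrative sketch only (the input matrix is toy data, and the paper's full MCA pipeline is more involved), the snippet below runs correspondence analysis on a small headline-by-category indicator matrix via the SVD of standardized residuals:

```python
import numpy as np

def ca_coordinates(indicator, n_components=2):
    """Correspondence analysis of an indicator (one-hot) matrix,
    e.g. headlines x topic categories. Returns principal row
    coordinates and the inertia (squared singular value) per axis."""
    Z = np.asarray(indicator, dtype=float)
    P = Z / Z.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    # Standardized residuals from the independence model r c^T
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    rows = (U * s) / np.sqrt(r)[:, None]  # principal row coordinates
    return rows[:, :n_components], s ** 2

# Toy example: 6 headlines, one-hot over 2 outlets and 2 topics.
Z = np.array([
    [1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 0, 1],
    [0, 1, 0, 1], [0, 1, 0, 1], [0, 1, 1, 0],
])
coords, inertia = ca_coordinates(Z)
```

Distances between row points along the leading axes then indicate how far outlets' headline mixes diverge on each theme, which is the "discrepancy" the paper measures.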
Generative AI Takeover 2023!!! Why is Generative AI everywhere in…
"With RunwayML, you can create and experiment with generative models in a matter of minutes, without having to write a single line of code" (RunwayML Website). Demand for innovation: Generative AI has opened up new possibilities and opportunities for innovation in various fields and industries. Generative AI can help generate new ideas, designs, products, services, etc. that can solve problems or meet needs. In manufacturing, Autodesk and Creo use generative AI to design physical objects. In some cases, they also create those objects through 3D printing, computer-controlled machining, and additive manufacturing. NVIDIA set an example with GauGAN, an AI-powered tool that can transform rough sketches into photorealistic images in real time. "GauGAN represents a major breakthrough in AI-powered image creation, opening up new possibilities for artists, designers, and creatives."
- Information Technology (0.90)
- Machinery > Industrial Machinery (0.55)
How GPT-3 responds to different publics on climate change and Black Lives Matter: A critical appraisal of equity in conversational AI
Chen, Kaiping, Shao, Anqi, Burapacheep, Jirayu, Li, Yixuan
Autoregressive language models, which use deep learning to produce human-like texts, have become increasingly widespread. Such models are powering popular virtual assistants in areas like smart health, finance, and autonomous driving. While the parameters of these large language models are improving, concerns persist that these models might not work equally for all subgroups in society. Despite growing discussions of AI fairness across disciplines, there is a lack of systematic metrics to assess what equity means in dialogue systems and how to engage different populations in the assessment loop. Grounded in theories of deliberative democracy and science and technology studies, this paper proposes an analytical framework for unpacking the meaning of equity in human-AI dialogues. Using this framework, we conducted an auditing study to examine how GPT-3 responded to different sub-populations on crucial science and social topics: climate change and the Black Lives Matter (BLM) movement. Our corpus consists of over 20,000 rounds of dialogues between GPT-3 and 3290 individuals who vary in gender, race and ethnicity, education level, English as a first language, and opinions toward the issues. We found a substantively worse user experience with GPT-3 among the opinion and the education minority subpopulations; however, these two groups achieved the largest knowledge gain, changing attitudes toward supporting BLM and climate change efforts after the chat. We traced these user experience divides to conversational differences and found that GPT-3 used more negative expressions when it responded to the education and opinion minority groups, compared to its responses to the majority groups. We discuss the implications of our findings for a deliberative conversational AI system that centralizes diversity, equity, and inclusion.
- North America > United States > Texas > Travis County > Austin (0.14)
- North America > United States > Wisconsin > Dane County > Madison (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.94)
- Health & Medicine (1.00)
- Education (0.94)
- Law > Civil Rights & Constitutional Law (0.61)
'It felt like a job application': the people weeding out first dates with questionnaires
One night this January, as Robert Stewart scrolled through old Hinge matches, he decided to revive a conversation he had begun months ago with a woman on the dating app. After picking up where they left off and exchanging a few pleasantries, Stewart asked if the woman wanted to get on a phone call. He hoped it would lead to an in-person date. "We could do that," the woman answered, but with one caveat. Stewart, who lives in Dallas, clicked on a Google Form the woman sent, titled "Dating Compatibility Q&A".
Where AI and disinformation meet
With the midterm elections just weeks away, the political vitriol and rhetoric are about to heat up. One Arizona State University professor thinks most of the hyperbolic chatter will come from malicious bots spreading racism and hate on social media and in the comments section on news sites. Victor Benjamin, assistant professor of information systems at the W. P. Carey School of Business, has been researching this phenomenon for years. He says the next generation of AI is a reflection of what's going on in society. Benjamin says that as AI learning becomes increasingly dependent on public data sets, such as online conversations, it is vulnerable to influence from cyber adversaries injecting disinformation and social discord. They are swaying public opinion on issues such as presidential elections, public health and social tensions.
Study Reveals Public Concerns About AI Technology
A study published in AI and Ethics found that people in Japan, Germany, and the United States have different concerns over the use of artificial intelligence technology in everyday life. In Japan, people reported more concern about AI used to fight crime. By contrast, Germans and Americans tended to report more concern over the ethical and social aspects of using AI in entertainment. Around 1,000 respondents from each country were surveyed. Older respondents were found to be the most concerned about the ethical and social issues of AI, whereas those more familiar with AI were more worried about legal implications.
- Asia > Japan (0.58)
- North America > United States (0.31)
- Europe > Germany (0.31)
Decoding the Social Effects Of Media with Machine Learning
What if media were optimized to benefit people? This thought-provoking question is at the core of Harmony Labs' mission. A nonprofit organization headquartered in New York City, Harmony Labs strives to better understand the impact of media on society, and to build communities and tools to reform and transform media systems. As Brian Waniewski, Executive Director at Harmony Labs, puts it: "The media systems that we have now, for better or worse, have become outrage machines and sorting machines that put people into groups of like minds. The business incentive structures of these systems are such that the more outrage there is, the more profit there is. Political events across the world in recent years have borne out what these media systems produce, and it's really pretty toxic, and pretty hard to get anything done within. There are all kinds of natural divisions between people, but these media systems tend to reinforce these divisions. So, the first question that we're asking is, What's the scope of this problem? And then, What can we do to solve it?"